Bipartite expander Hopfield networks as self-decoding high-capacity error correcting codes
Rishidev Chaudhuri, Ila Fiete
Neural network models of memory and error correction famously include the Hopfield network, which can directly store---and error-correct through its dynamics---arbitrary N-bit patterns, but only ~N of them. At the other end of the spectrum, Shannon's coding theory established that it is possible to represent exponentially many states (~e^N) using N symbols in such a way that an optimal decoder can correct all errors up to a threshold. We prove that it is possible to construct an associative content-addressable network that combines the properties of strong error correcting codes and Hopfield networks: it simultaneously possesses exponentially many stable states; these states are robust, with basins of attraction large enough that they can be correctly recovered despite errors in a finite fraction of all nodes; and the errors are intrinsically corrected by the network's own dynamics. The network is a two-layer Boltzmann machine with simple neural dynamics, low dynamic-range (binary) pairwise synaptic connections, and sparse expander graph connectivity. Thus, quasi-random sparse structures---characteristic of important error-correcting codes---may provide for high-performance computation in artificial neural networks and the brain.
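To make the architecture concrete, here is a minimal Python sketch of a bipartite network of this general kind: sparse quasi-random connectivity, binary weights, winner-take-all clusters in the hidden layer, and alternating clean-up dynamics. All sizes, the pattern-storage rule, and the update schedule are illustrative assumptions for the sketch, not the construction proved correct in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; these are assumptions, not the paper's parameters.
N, M, d = 120, 60, 10   # visible units, hidden clusters, visible inputs per cluster
P = 20                  # number of stored patterns

# Sparse quasi-random bipartite connectivity: cluster j reads the subset S[j].
S = np.array([rng.choice(N, size=d, replace=False) for _ in range(M)])

# Patterns to store, entries in {-1, +1}.
patterns = rng.choice([-1, 1], size=(P, N))

# Binary weights: each hidden unit in cluster j memorizes one stored local
# pattern on S[j]; duplicate local patterns collapse onto a single unit.
W = [np.unique(patterns[:, S[j]], axis=0) for j in range(M)]

def clean_up(x, sweeps=20):
    """Alternate winner-take-all hidden updates with visible majority votes."""
    x = x.copy()
    for _ in range(sweeps):
        votes = np.zeros(N)
        for j in range(M):
            # WTA within cluster j: the unit whose binary weight vector best
            # matches the current local input becomes the sole active unit.
            winner = W[j][np.argmax(W[j] @ x[S[j]])]
            votes[S[j]] += winner  # the winner feeds its weights back down
        # Each visible unit adopts the majority sign of the feedback it receives.
        x = np.where(votes != 0, np.sign(votes), x).astype(int)
    return x

# Corrupt 10% of one stored pattern, then let the network's dynamics decode it.
target = patterns[0]
noisy = target.copy()
noisy[rng.choice(N, size=N // 10, replace=False)] *= -1
print("bits still wrong:", int(np.sum(clean_up(noisy) != target)))
```

Running the script flips 10% of the bits of a stored pattern and reports how many remain wrong after the clean-up dynamics; the paper's contribution is proving that an expander-based version of this kind of scheme achieves exponential capacity with guaranteed correction.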
Reviews
As the authors note, capacity is often traded off against robustness: the redundancy a code needs to recover from noise necessarily reduces its capacity, making this an especially difficult problem. The authors rise to this challenge and claim to produce a network that exhibits "exponential capacity, robustness to large errors, and self decoding or clean-up of these errors". Their network takes the structure of a restricted Boltzmann machine in which the hidden units consist of clusters of neurons that laterally inhibit one another, the neurons within a cluster sharing the same connectivity to the input units but with different weights. The authors motivate their network using ideas from expander graphs, error correcting codes, and Hopfield networks. The proposed solution is straightforward and well-motivated, and all the design and algorithm choices seem quite sensible.
This paper presents a novel form of associative content addressable (ACA) memory. The canonical model of ACA memory is the Hopfield network, which can store only approximately N patterns of N bits. The authors use developments from error-correcting codes (ECCs) to implement an ACA memory that can store ~e^N N-bit patterns. This is accomplished using a bipartite expander graph, which is essentially a restricted Boltzmann machine (RBM) whose hidden nodes are clusters of mutually inhibitory units. The authors demonstrate that these networks have dynamics that perform error correction similar to ECCs, enabling the storage of exponentially many patterns.
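For concreteness, the capacity gap at issue can be spelled out with standard coding-theory arithmetic; these are textbook scalings, not numbers taken from the paper:

```latex
% Classical Hopfield capacity: sub-linear in N for error-free recall
P_{\mathrm{Hopfield}} = O\!\left(\frac{N}{\log N}\right)

% A rate-R code on N bits: the number of codewords is exponential in N
P_{\mathrm{code}} = 2^{RN} = e^{(R \ln 2)\, N} \quad \text{for any fixed rate } R \in (0, 1)
```

The claim under review is that a neural network can reach the second scaling while retaining the Hopfield network's attractor dynamics for clean-up.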